聴覚、前庭感覚
Auditory and Vestibular Systems
P1-1-114
上丘・聴覚皮質神経細胞の刺激選択性の情報量最大化モデルによる再現
Information maximization explains the stimulus selectivity of auditory-related neurons in the brainstem and cortex

○田中琢真1, 中村清彦1
○Takuma Tanaka1, Kiyohiko Nakamura1
東工大院・総理工・知能システム科学1
Dept Computational Intelligence and Systems Science, Tokyo Tech, Yokohama1

Primary auditory neurons respond to simple acoustic stimuli such as white noise, pure tones, and clicks. The response of a primary auditory neuron is largely predictable from its threshold and its characteristic frequency, the frequency of the tone to which the neuron is selective. Neurons in the brainstem auditory nuclei, including the cochlear nucleus, exhibit more complex receptive field properties such as tuning curves with a zone of activation surrounded by a zone of inhibition. In contrast to primary auditory neurons and cochlear nucleus neurons, a majority of inferior colliculus neurons exhibit non-monotonic intensity functions, that is, the firing rate of these neurons varies non-monotonically as a function of sound intensity. In addition, a fraction of inferior colliculus neurons is selective to frequency-modulated sounds. Cortical neurons are selective to more complex features including the pitch of sounds, the intensity of sounds, and conspecific vocalizations. These experimental results reveal that auditory information is processed by a hierarchical pathway of auditory regions in the brainstem and cortex. Hence, we modeled the auditory pathway as a multilayer feedforward network. Using natural sounds and human vocalizations as input, we performed a learning process in which the information transmitted from the input layer to the output layers is maximized. After learning, the first-output-layer neurons in the model network exhibited tuning curves similar to those of primary auditory neurons. Most second-output-layer neurons became selective to the frequency modulation of sounds and to pitch, and a majority of these neurons exhibited non-monotonic intensity functions. These results suggest that neurons in the central auditory pathway acquire selectivity to complex features of sounds in order to maximize the information conveyed by their spike discharges.
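The abstract specifies the objective (maximizing transmitted information) but not the learning rule. As a rough, hypothetical illustration of the principle, the sketch below applies a single-layer Bell-Sejnowski-style infomax update; the network size, random input, and tanh nonlinearity are assumptions for illustration, not the authors' model.

```python
import numpy as np

def infomax_step(W, x, lr=0.01):
    """One natural-gradient infomax update (Bell & Sejnowski style).

    W : (n, n) weight matrix of one layer
    x : (n, batch) input batch (e.g., spectrogram slices of natural sounds)
    """
    u = W @ x                       # linear responses
    y = np.tanh(u)                  # sigmoidal output nonlinearity
    batch = x.shape[1]
    # dW ∝ (I - 2*y*u^T/batch) W increases the entropy of the outputs,
    # i.e., the information transmitted through the layer
    dW = (np.eye(W.shape[0]) - 2.0 * (y @ u.T) / batch) @ W
    return W + lr * dW

rng = np.random.default_rng(0)
W = np.eye(4)                       # toy 4-unit layer
x = rng.standard_normal((4, 256))   # placeholder input batch
W = infomax_step(W, x)
```

In the actual model such an update would be applied layer by layer to representations of natural sounds and vocalizations, rather than to random data as here.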
P1-1-115
音脈分凝にかかわる聴覚野の機能的サブネットワーク構造の抽出
Extraction of functional subnetwork structure for auditory stream segregation in auditory cortex

○野田貴大1, 神崎亮平1, 高橋宏知1,2
○Takahiro Noda1, Ryohei Kanzaki1, Hirokazu Takahashi1,2
東京大学 先端科学技術研究センター1, JST さきがけ2
Research Center for Advanced Science and Technology, Univ of Tokyo, Tokyo1, JST PRESTO2

The auditory system can organize or segregate a specific acoustic object, i.e., an auditory stream, from complex acoustic stimuli. An alternating tone sequence differing in frequency (ABA-tone sequence) induces different streams depending on the frequency difference (ΔF) between the A and B tones and the inter-tone interval (ITI) between successive tones. The tone sequences are more likely to be perceived as segregated streams of A-A-A- and B---B--- with a larger ΔF and shorter ITI, while an integrated stream of ABA-ABA- is perceived with a smaller ΔF irrespective of ITI. The underlying mechanism of this perceptual organization may involve synchronous activity patterns across neuronal ensembles (Engel, 2001). Our previous study suggested that a subnetwork selective to the frequency of segregated-stream-like tones persists in the functional network of the auditory cortex (AC), where functional connectivity was based on phase synchronization of gamma-band LFP oscillations between recording sites. However, whether the functional subnetwork structure in the auditory cortex depends on tonotopy has not been explicitly examined. The present study extracted the functional subnetwork structure with a community detection method from graph theory, and investigated both the dependency of the subnetwork structure on tonotopy and its change with ΔF and ITI inducing segregated or integrated stream-like conditions. As a result, communities in the functional network based on gamma-phase synchrony were distributed in a strongly tonotopy-dependent manner. Furthermore, under the segregated stream-like condition, the CF distribution of a community whose average CF was selective to the frequency of each segregated stream tended to be concentrated and localized at a specific CF, unlike that under the integrated stream-like condition. These findings suggest that functional subnetworks in the auditory cortex contribute to the neuronal representation of auditory stream segregation.
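For readers unfamiliar with the paradigm, the following sketch synthesizes an 'ABA-' triplet sequence of the kind described above; all parameter values (frequencies, durations, sampling rate) are illustrative placeholders, not the stimulus parameters used in the study.

```python
import numpy as np

def aba_sequence(f_a=1000.0, delta_f_semitones=6, tone_dur=0.05,
                 iti=0.05, n_triplets=4, fs=44100):
    """Synthesize an 'ABA-' triplet sequence used in streaming studies.

    f_a : frequency of the A tone (Hz); illustrative value.
    delta_f_semitones : A-B separation; larger ΔF favors segregation.
    iti : silent interval between successive tones (s); shorter ITI
          also favors segregation.
    """
    f_b = f_a * 2 ** (delta_f_semitones / 12.0)   # B tone, ΔF above A
    t = np.arange(int(tone_dur * fs)) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)
    gap = np.zeros(int(iti * fs))
    # 'ABA-' : A, B, A, then a silent slot where a fourth tone would fall
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap,
                              tone(f_a), gap, gap])
    return np.tile(triplet, n_triplets), fs

seq, fs = aba_sequence()
```

Sweeping `delta_f_semitones` and `iti` reproduces the two perceptual regimes (integrated vs. segregated) that the study contrasts.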
P1-1-116
ショウジョウバエ中枢神経系における聴覚神経回路の解析
The auditory circuit in the central nervous system of Drosophila

○松尾恵倫子1, 関治由2, 浅井智紀1, 森本高子2, 宮川博義2, 伊藤啓3, 上川内あづさ1,4
○Eriko Matsuo1, Haruyoshi Seki2, Tomonori Asai1, Takako Morimoto2, Hiroyoshi Miyakawa2, Kei Ito3, Azusa Kamikouchi1,4
名古屋大・院理1, 東京薬科大・生命科2, 東京大・分生研3, JSTさきがけ4
Graduate School of Science, Nagoya University, Aichi1, Tokyo University of Pharmacy and Life Sciences, School of Life Sciences, Tokyo2, Institute of Molecular and Cellular Biosciences, The University of Tokyo, Tokyo3, Decoding and controlling Brain Information, PRESTO, JST, Japan4

Many animals are exposed to various sounds in every aspect of life. From such noisy environments, animals can extract specific acoustic information to communicate with each other. To understand how the auditory mechanisms underlying acoustic communication work, we have started a systematic analysis of the auditory circuit in the central nervous system of fruit flies. The fruit fly is an excellent model for neural systems analysis because of its small brain and powerful genetic tools for manipulating neural function and activity. Acoustic signals are relayed to the primary auditory center, from where secondary auditory neurons project to higher-order centers in the fly brain. All secondary auditory neurons so far identified in the fly brain target a brain region called the inferior ventrolateral protocerebrum. Whether other brain regions and the ventral nerve cord receive input from the secondary auditory neurons, however, is unclear. To establish a comprehensive map of the secondary auditory circuit, we screened a collection of 3,939 GAL4 fly strains. Thirteen strains were selected that label neurons connecting the primary auditory center and other areas of the central nervous system. By using a standard FLP-FRT recombination technique, we identified distinct types of neurons that innervate various brain regions, e.g. centers for other sensory modalities. We also identified neurons that project directly to the thoracic ganglia. The markers for the pre- and postsynaptic sites of these neurons are distributed in various brain regions, including the primary auditory center. These results suggest (1) a parallel processing system of acoustic information via the main auditory pathway, (2) integration of auditory and other sensory information in particular brain regions, and (3) direct descending pathways from the primary auditory center to the thoracic ganglia in flies.
P1-1-117
ショウジョウバエ脳における聴覚神経細胞の応答特性の解明
Elucidation of the response characteristics of auditory nerve cells in the Drosophila brain

○山田大智1, 松尾恵倫子1, 上川内あづさ1,2
○Daichi Yamada1, Eriko Matsuo1, Azusa Kamikouchi1,2
名古屋大学大学院 理学研究科 生命理学1, JST さきがけ2
Graduate School of Science, Nagoya University1, Decoding and controlling Brain Information, PRESTO, JST2

Similar to our inner ear, the antennal ear of the fruit fly Drosophila melanogaster serves as a complex detector in which specialized clusters of cells have distinct mechanosensory responsibilities. The fly's ear, the so-called Johnston's organ (JO), comprises five anatomically distinct neural subgroups, A to E. Among these five subgroups, A and B neurons are vibration sensitive whereas C and E neurons are tuned to static deflections, serving as sound and gravity/wind detectors, respectively. The response characteristics of subgroup-D neurons, however, remain unclear.
Subgroup-D JO neurons have characteristic neuroanatomic properties: their cell bodies form two clusters in JO, from where they send axonal projections to the posterior end of the brain. To reveal the response property of subgroup-D neurons, we combined the GCaMP3 calcium imaging technique with our electrostatic method to actuate the antennal receiver. Sinusoidal actuation of the receiver induced neural activity in A and B neurons as reported previously, validating our method to monitor the response property of specific neurons in the brain. Flies' receivers were then exposed to simple stimuli such as static deflections or sinusoidal vibration, and to more complex stimuli such as the courtship song or an agonistic sound. In this meeting we will discuss the response characteristics, and putative functional roles, of subgroup-D neurons of the fly's ear.
P1-1-118
下丘外側皮質から視蓋前域ー中脳網様体への投射
Projection from the external cortex of the inferior colliculus to the pretectum-midbrain reticular formation

○井之口文月1, 瀧公介1, 相見良成1, 玉川俊広1, 木村智子1, 西村美紀1, 本間智1, 工藤基1
○Fuduki Inoguchi1, Kousuke Taki1, Yoshinari Aimi1, Toshihiro Tamagawa1, Tomoko Kimura1, Miki Nishimura1, Satoru Honma1, Motoi Kudo1
滋賀医科大学 医学部 解剖学1
Dept Anat, Shiga Univ of Med Sci, Otsu, Japan1

[Introduction] The inferior colliculus (IC) is the largest auditory structure in the brainstem, consisting of the central nucleus, the external cortex (ECIC), and the dorsal cortex. The pretectum and midbrain reticular formation (Pt-MRF), a visual reflex center, is located medial to the medial geniculate (MG). Pt-MRF lesions in the rat severely attenuate the acquisition of long-term habituation of the startle response when the lesions are made prior to habituation training. These findings suggest that the Pt-MRF plays an important role in audio-visual intermodal interaction within spatial perception. A considerable number of the IC neurons projecting to the MG are GABAergic. Our previous study has shown that the Pt-MRF receives a direct projection from the shell of the IC, including the ECIC (Kudo et al., 1983). This belt pathway is part of the fear-potentiated startle circuit and has glutamatergic transmission (Zhao and Davis, 2004). We aimed to study the projection from the ECIC to the Pt-MRF using double fluorescent labeling techniques.
[Method] Alexa Fluor 555-conjugated cholera toxin B subunit was injected into the Pt-MRF in the rat. After 3-5 days' survival, rats were fixed and brain slices containing the IC were processed with double or triple fluorescent labeling using anti-GABA and/or anti-vGluT2 antibodies. Alexa Fluor 555-labeled neurons were examined for GABA immunoreactivity and/or for surrounding vGluT2-immunoreactive axon terminals.
[Results] 1) Retrogradely labeled neurons were distributed almost exclusively in layer II of the ECIC. Among these projection neurons in the ECIC, GABA-containing neurons were very small in number. These neurons were located in the ventral part of the ECIC. 2) The GABAergic neurons were distributed exclusively in the ventral part of the ECIC. 3) Some of these GABAergic neurons were surrounded by vGluT2 immunoreactivity, and others were not.
Supported by JSPS KAKENHI Grant Number 22791593 and 24592513.
P1-1-119
音及び電気刺激が誘発するラット聴覚皮質の神経活動伝搬特性 -膜電位感受性色素による光計測-
Voltage-sensitive dye imaging of unidirectional dual-component propagation of activity evoked by sound and electrical stimulation in rat multiple auditory cortical fields

○西川淳1, 能登将成1, 舘野高1
○Jun Nishikawa1, Masanari Noto1, Takashi Tateno1
北大院・情報科学1
Grad Sch Info Sci Tech, Hokkaido Univ, Sapporo1

The auditory cortex has been divided into several regions, including the anterior, posterior, and suprarhinal auditory fields (AAF, PAF, and SRAF, respectively) around the primary auditory cortex (AI), on the basis of electrophysiological studies of response properties. How a sound or electrical stimulus is represented dynamically in the cortex in space and time, though, is largely unclear. In this study, we aimed to directly measure the spatiotemporal dynamics of activity propagation among the multiple fields. Voltage-sensitive dye imaging (VSDI) in the exposed and stained left auditory cortex of anesthetized rats showed that activity-initiation sites were systematically shifted with different tone stimuli, indicating a tonotopic organization. We also found that the tone-evoked activation consisted of dual components propagating in caudal and rostral fields. In the caudal component, evoked activity emerged in the AI tonotopic area, propagated across the isofrequency strips, and finally reached ventral AAF or SRAF. In the rostral component, in contrast, activity appeared in the AAF tonotopic area, spread across the isofrequency strips, and ended up in PAF. To examine the preferred directions of activity propagation among the multiple fields, VSDI was combined with single electrical microstimulation at multiple sites. The results demonstrated that microstimulation tended to evoke propagation in the same directions as the tone stimulation. These results indicate that stimulus-induced interactions among auditory subfields are not uniform, but propagate in a unidirectional fashion. In conclusion, we found a close correspondence of unidirectional dual-component propagation of activity to both sound and electrical stimulation. This opens up the possibility of developing an electronic device directly implanted in the auditory cortex for hearing-impaired patients.
P1-1-120
和音の協和度がラット聴皮質の位相同期に及ぼす影響
Effects of chord consonance on phase synchrony in the rat auditory cortex

○白松-磯口知世1,2, 野田貴大2, 神崎亮平2, 高橋宏知2,3
○Tomoyo Shiramatsu-Isoguchi1,2, Takahiro Noda2, Ryohei Kanzaki2, Hirokazu Takahashi2,3
東京大院・情理1, 東京大・先端研2, JST さきがけ3
Grad school of Info Sci and Tech, Univ of Tokyo, Tokyo, Japan1, Res Cent for Adv Sci and Tech, Univ of Tokyo, Tokyo, Japan2, PRESTO, JST, Saitama, Japan3

Many sounds in the environment have a rich spectrum, which is related to the qualia of the sound such as the consonance or dissonance of chords. However, the processing of chord consonance in the auditory cortex is not fully understood. In this study, we focused on the phase locking value (PLV) of the local field potential (LFP) and investigated whether the PLV represents the consonance of chords consisting of two pure tones. A microelectrode array with a grid of 96 recording sites recorded LFPs from the fourth layer of the auditory cortex of anesthetized rats in response to continuous pure tones and chords. In the pure tone condition, we randomly presented 9 pure tones (12, 13.5, 14.4, 15, 16, 18, 19.2, 20, and 24 kHz, 60 dB SPL) for 30 seconds continuously, each of which was interleaved with a silent block of 30 seconds. In the chord condition, we made 8 chords by combining a 12-kHz pure tone with one of the other 8 pure tones, and presented them in the same way. From the recorded LFPs, we calculated PLVs in 5 bands (theta, 4 - 8 Hz; alpha, 8 - 14 Hz; beta, 14 - 30 Hz; low gamma, 30 - 40 Hz; high gamma, 60 - 80 Hz). Then, the difference between the median PLV within the auditory cortex in a stimulus block and that in the adjacent silent block was calculated as ΔPLV. Finally, the difference between the ΔPLV of a chord and that of the higher-frequency component tone of the chord was evaluated for consonant (higher component, 15, 16, 18 or 20 kHz) and dissonant chords (higher component, 13.5, 14.4 or 19.2 kHz). ΔPLVs of consonant chords were significantly larger than ΔPLVs of pure tones in all bands, while ΔPLVs of dissonant chords were not significantly larger except in the alpha band. Especially in the beta, low gamma and high gamma bands, the ΔPLV increase was larger for consonant than dissonant chords. These results suggest that phase synchrony within the auditory cortex is modulated according to complex sound spectra, such as the consonance of chords.
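The PLV used here is a standard measure of phase synchronization. A minimal NumPy sketch, using an FFT-based analytic signal and two synthetic narrow-band signals with a fixed phase lag (the signals and rates below are illustrative, not the recorded LFPs):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value between two band-limited signals, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs, f = 1000, 40.0                      # low-gamma example, illustrative
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * f * t)
y = np.sin(2 * np.pi * f * t + 0.5)     # constant phase lag -> PLV near 1
sync = plv(x, y)
```

A matrix of such PLVs over all recording-site pairs yields the functional network to which the graph-theoretic analyses can be applied.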
P1-1-121
コウモリのエコーロケーションにおける背景信号に埋もれた昆虫情報検知の神経機構
Neural mechanism of decoding insect information embedded in background signal in echolocating bat

○加藤貴之1, 樫森与志喜1,2
○Takayuki Katou1, Yoshiki Kashimori1,2
電気通信大学大学院 情報システム学研究科 情報メディアシステム学専攻1, 電気通信大学大学院 情報理工学研究科 先進理工学専攻2
Graduate School of Information Systems, University of Electro-Communications, Chofu, Tokyo, Japan1, Department of Engineering Science, University of Electro-Communications, Chofu, Tokyo, Japan2

Most species of echolocating bats use the sound pressure level and Doppler-shifted frequency of the ultrasonic echo pulse to measure the size and velocity of a target. The neural circuits for detecting these target features are specialized for amplitude and frequency analysis of the second harmonic constant-frequency component of Doppler-shifted echoes. In natural situations, bats have to accurately detect detailed information about a flying insect embedded in background signals reflected from large natural objects such as bushes and trees. However, it is not yet clear how bats distinguish target information from background signals. To address this issue, we developed a neural network model for detecting the Doppler-shifted frequency of echo sounds. The model contains networks of the cochlea, inferior colliculus (IC), and Doppler-shifted constant frequency (DSCF) processing area. The IC neurons are subjected to an oscillation signal in order to enhance the contrast of the echo signals, evoking an oscillation of subthreshold membrane potentials. The DSCF network has two types of sub-networks detecting the AC and DC components of the echo sound, which represent information about the insect's fluttering and the background signal, respectively. Using the model, we show that the AC and DC components of the echo signal are well discriminated by the two sub-networks of DSCF. The information of the AC component is decoded by short-term synaptic plasticity of the connections between IC neurons and neurons of the AC area. The present model suggests functional roles of the IC and DSCF in decoding insect information in the bat's brain.
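As a toy illustration of the AC/DC decomposition that the DSCF sub-networks are described as performing, the snippet below separates a steady background reflection from a fluttering component; the echo envelope and the 50 Hz flutter rate are invented for illustration, not taken from the model.

```python
import numpy as np

fs = 10_000                                   # sampling rate (Hz), illustrative
t = np.arange(fs) / fs                        # 1 s of echo envelope
# steady background reflection (DC) plus insect-wing flutter (AC):
echo_env = 1.0 + 0.2 * np.sin(2 * np.pi * 50 * t)
dc = echo_env.mean()                          # background-signal component
ac = echo_env - dc                            # fluttering-insect component
```

In the model these two components are separated by dedicated sub-networks rather than by an explicit mean subtraction, but the target quantities are the same.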
P1-1-122
混合音中で複数回出現する未知の音の検出に選択的注意は必要か
Does recovering novel sound sources from embedded repetition require directed attention?

○益冨恵子1,2, 柏野牧夫1,2
○Keiko Masutomi1,2, Nicolas Barascud3, Tobias Overath3, Makio Kashino1,2, Josh H. McDermott4, Maria Chait3
東工大院・総合理工1, NTTコミュニケーション科学基礎研2
Grad Sch of Sci and Eng, Tokyo Tech, Kanagawa, Japan1, NTT Communication Science Labs, Kanagawa, Japan2, UCL Ear Institute, London, UK3, Dept Brain Cogn Sci, MIT, Cambridge, US4

It is known that the auditory system can recover sound sources from mixtures by detecting repeating structure embedded in the acoustic input [McDermott, et al. (2011) PNAS. 108(3):1188-93]. In the present series of experiments we aim to investigate whether this repetition-based sound segregation requires selective attention to sounds or whether it can occur partially outside the focus of attention. We adapted the original paradigm of McDermott et al. to a dual-task design in which participants are required to perform a difficult 'decoy' task concurrent with presentation of the sequence of sound mixtures. Trials consist of ten sound mixtures, each composed of a repeating target and a distractor that changes from mixture to mixture. Participants judge whether a subsequently presented probe appeared in the sound mixtures. Concurrently, participants are presented with 'decoy' visual (Exp 1, 2) or auditory (Exp 3) stimuli. We manipulate the attentional load of the decoy task by instructing participants to perform 'low load' or 'high load' tasks on these signals. In Experiment 1 we use a rapid sequence of serially presented visual stimuli (digits). In the 'low load' condition listeners are required to report whether an underline appeared with one of the digits. In the 'high load' condition, they are required to memorize the digit order. In Experiment 2, we use a multiple-object-tracking visual task, which is attentionally demanding but does not rely on echoic memory. In Experiment 3, we employ a decoy auditory task in which listeners are required to count rapidly presented tones. This study is ongoing. The results of Experiment 1 demonstrate that a concurrent, high-load visual task results in similar performance levels to those measured during a low-load task. Our results suggest that listeners can segregate novel sounds from mixtures when attention is directed elsewhere, providing support for repetition-based segregation as an automatic, bottom-up process.
P1-1-123
周期性を持つ複雑な音に対するマーモセット外側ベルト領域ニューロンの神経応答
Neuronal responses to periodic complex sounds in the auditory lateral belt regions of marmoset monkeys

○坂野拓1, 鈴木航1, 宮川尚久1, 一戸紀孝1
○Taku Banno1, Wataru Suzuki1, Naohisa Miyakawa1, Noritaka Ichinohe1
独立行政法人国立精神・神経医療研究センター 神経研究所1
National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo1

Common marmosets (Callithrix jacchus) have various call types that enable them to communicate with other individuals in colonies. Among their vocal repertoire, trill, phee, and twitter calls can be considered contact calls and are the most frequently used call types in their daily lives. Elucidating the neuronal representation of these calls is thus important for understanding the neuronal mechanisms of vocal communication in marmosets. All of these contact calls are tonal and periodic, indicating that sounds very close to these calls can be synthesized by simple sinusoidal amplitude and frequency modulation of pure tones with appropriate carrier frequencies. We created a stimulus set of amplitude- and/or frequency-modulated tones whose modulation frequency was parametrically varied, containing sounds whose modulation parameters matched typical marmoset contact calls. We also presented real contact calls and examined neuronal responses to these sounds in the auditory lateral belt regions of marmosets. Neurons in the anterior lateral belt responded relatively strongly to amplitude- or frequency-modulated tones. These responses to the sinusoidally modulated tones were phasic but did not show rhythmic activity synchronized with the modulation frequency of the stimulus. More complex sounds combining both amplitude and frequency modulation could activate cells, but simple linear summation of responses to the sinusoidally modulated tones failed to predict the response pattern to the complex tones. Furthermore, we found that real marmoset contact calls effectively activated cells, but the responses to artificial voices and real marmoset voices did not necessarily correspond well. These results suggest that neurons in the auditory lateral belt nonlinearly compute combinations of sinusoidal amplitude and frequency modulations and also respond to complex features of sounds that cannot be explained by a simple combination of periodic modulations.
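A minimal sketch of how sinusoidally amplitude- and/or frequency-modulated tones of this kind can be synthesized (the parameter values are placeholders; the carrier frequencies and modulation depths of the actual stimulus set are not reproduced here):

```python
import numpy as np

def modulated_tone(fc, fm, dur=0.5, fs=44100, am_depth=0.0, fm_dev=0.0):
    """Tone at carrier fc (Hz) with sinusoidal AM and/or FM at rate fm (Hz).

    am_depth : 0 (no AM) .. 1 (full AM)
    fm_dev   : peak frequency deviation of the FM (Hz); 0 disables FM
    """
    t = np.arange(int(dur * fs)) / fs
    env = 1.0 + am_depth * np.sin(2 * np.pi * fm * t)
    # integrate the instantaneous frequency fc + fm_dev*sin(2*pi*fm*t):
    phase = 2 * np.pi * fc * t
    if fm_dev:
        phase -= (fm_dev / fm) * np.cos(2 * np.pi * fm * t)
    return env * np.sin(phase)

s = modulated_tone(7000.0, 30.0, am_depth=1.0)  # illustrative AM tone
```

Varying `fm` parametrically, and combining nonzero `am_depth` and `fm_dev`, generates the kind of stimulus continuum the abstract describes.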
P1-1-124
ラット聴覚皮質において発散・収束する言語音処理経路
Divergent and convergent pathways for speech sound processing in the rat auditory cortex

○小川剛1, 工藤雅治1
○Go Ogawa1, Masaharu Kudoh1
帝京大・医・生理1
Dept Physiol, Teikyo Univ Sch Med1

It is considered that sensory information is processed in parallel and serial pathways in the cortex. However, such information processing in the auditory cortex remains unknown. Rats can discriminate between synthetic vowels with multiple formants, and the characteristics of this discrimination are similar to those of vowel recognition in humans. Therefore, the rat is thought to be a suitable animal model for research on the mechanisms of speech sound discrimination. We investigated responses to speech sounds in the rat auditory cortex using endogenous flavoprotein fluorescence imaging (excitation: 450-490 nm, emission: 500-550 nm). The auditory cortex of rats consists of more than five fields. Synthetic formants and vowels evoked marked responses in the anterior auditory field (AAF) and dorsal auditory field (DAF) in addition to the primary auditory cortex (AI). AAF responded markedly to formants with fast waveform envelopes, while AI responses depended on the spectrum of sounds. In contrast, synthetic fricative consonants evoked prominent responses in AI and the ventral auditory field (VAF). Thus, auditory fields respond distinctively to different kinds of speech sounds. We also examined functional connections of auditory fields by electrical stimulation applied to the auditory cortex. Stimulation of AAF elicited fluorescence responses in DAF and the ventral part of AAF (anteroventral auditory field, AVAF), while stimulation of AI evoked responses in DAF, VAF, the posterior auditory field (PAF) and AVAF. These findings indicate that there are divergent and convergent pathways. Simultaneous electrical stimulation of AI and AAF evoked larger fluorescence responses in DAF than individual stimulation of AI or AAF. This suggests that DAF receives convergent inputs from both AAF and AI and integrates the temporal and spectral information analyzed in AAF and AI, respectively.
P1-1-125
運動関連情報の手がかり間および周波数間相似性に対する感度
Sensitivities to coherence of motion-direction information between cues and frequencies

○古川茂人1, 西田鶴代1,2, 近藤公久1, 筧一彦2
○Shigeto Furukawa1, Tsuruyo Nishida1,2, Tadahisa Kondo1, Kazuhiko Kakehi2
NTT・CS研1, 中京大・情報科学2
NTT Comm Sci Labs, NTT Corp, Atsugi1, Grad Sch Comp Cog Sci, Chukyo Univ, Toyota2

The motion of a sound source on the horizontal plane is represented as modulations of acoustical cues such as interaural time and level differences (ITD and ILD). Location percepts of auditory objects result from the integration of information across these cues and across frequency channels. This presentation reviews human psychophysical experiments that were conducted to estimate the upper temporal limits for processing auditory-motion information at different stages of processing. The first experiment measured the detectability of simultaneous modulations in ITD and ILD with various relative phases. The carrier stimulus was a bandpass-filtered noise centered at 500 Hz. For modulation rates up to 10-20 Hz, the detectability varied with the relative phase, indicating that the processes at or below the stage where the two cues are integrated preserve information about motion direction at those rates. The second experiment measured thresholds for detecting sinusoidal ITD modulations imposed on two carrier tones at 400 and 700 Hz. The modulations were either in phase or in antiphase between the carriers. There was no significant difference in threshold between the two phase conditions even at a very low modulation rate (1 Hz). The third experiment examined listeners' discrimination performance between the in-phase and antiphase modulations with relatively large depths (600 μs) and slow rates (1 Hz). Generally, the performance was barely above chance level. The results of the second and third experiments indicate that the auditory system is poor at comparing the directions of ITD modulations across frequencies even at slow rates. The results of the three experiments imply that in the human auditory system, the process for across-frequency integration of ITD information is located at a higher level along the stages of auditory processing than the process for across-cue integration.
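A sketch of the kind of sinusoidally ITD-modulated binaural stimulus described above; for a tonal carrier the time-varying ITD can be applied as a delay in one ear (parameter values are illustrative, and the study's exact stimulus construction may differ):

```python
import numpy as np

def itd_modulated_tone(fc=500.0, mod_rate=5.0, itd_depth=600e-6,
                       dur=1.0, fs=44100):
    """Binaural tone pair whose ITD is sinusoidally modulated.

    itd(t) swings +/- itd_depth seconds at mod_rate Hz; the right channel
    is a copy of the left delayed by the instantaneous ITD.
    """
    t = np.arange(int(dur * fs)) / fs
    itd = itd_depth * np.sin(2 * np.pi * mod_rate * t)
    left = np.sin(2 * np.pi * fc * t)
    right = np.sin(2 * np.pi * fc * (t - itd))   # time-varying delay
    return left, right

left, right = itd_modulated_tone()
```

Presenting two such carriers (e.g., 400 and 700 Hz) with their `itd` waveforms in phase or in antiphase reproduces the across-frequency comparison tested in the second and third experiments.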
P1-1-126
音源定位の神経回路と計算機シミュレーション
Neural circuits for sound localization and computer simulation

○福井巌1, 大森治紀1
○Iwao Fukui1, Harunori Ohmori1
京都大学大学院 医学研究科 神経生物学1
Dept Physiol & Neurol, Univ of Kyoto, Kyoto1

Sounds are coded as spike trains of auditory nerve fibers (ANFs), and various features of sounds are extracted in the cochlear nuclei of the brainstem. In birds, ANFs innervate two cochlear nuclei: the nucleus magnocellularis (NM) and the nucleus angularis. NM is specialized for the transmission of sound phase information bilaterally to the nucleus laminaris (NL), where the interaural time difference (ITD) is first processed. The axonal projection from the contralateral NM to the NL is longer than that from the ipsilateral NM. The best ITDs of NL neurons are between 0 and 400 μs, which corresponds to sound sources located from the center to the contralateral side. We compared the characteristic frequencies (CFs) of the ipsilateral and contralateral NM neurons that project to the same NL neuron. The CFs of contralateral NM inputs were consistently higher than those of ipsilateral NM inputs. Because of the cochlear delay, high-CF NM units fire temporally earlier than lower-CF NM units. These results indicate that the cochlear delay has an important role in ITD processing. Furthermore, using computer simulation, we will discuss the details and advantages of ITD detection using different CFs.
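The NM-to-NL arrangement described above is the classic Jeffress-type coincidence-detection circuit. The toy decoder below reads out ITD from an array of internal delay lines via cross-correlation; the delays, frequency, and sampling rate are illustrative, and the CF-mismatch/cochlear-delay refinement proposed in the abstract is not modeled.

```python
import numpy as np

def decode_itd(left, right, fs, max_delay_us=400, n_taps=21):
    """Jeffress-style readout: each internal delay stands for one NL neuron;
    the delay whose 'coincidence count' (correlation) is largest is the
    decoded ITD."""
    delays = np.linspace(-max_delay_us, max_delay_us, n_taps) * 1e-6
    t = np.arange(left.size) / fs
    responses = [np.sum(left * np.interp(t - d, t, right)) for d in delays]
    return delays[int(np.argmax(responses))]

fs, f = 100_000, 1000.0
t = np.arange(2000) / fs                      # 20 ms of input
true_itd = 200e-6                             # source toward one side
left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * f * (t + true_itd))
est = decode_itd(left, right, fs)             # recovers roughly 200 us
```

The delay range here (0-400 μs on each side) matches the range of best ITDs reported for NL neurons.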
P1-1-127
生体マウスの下丘表層における音刺激に対する神経活動:カルシウムイメージングによる解析
Sound evoked activity of cells in the superficial layer of the mouse inferior collicular cortex: an in vivo calcium imaging study

○廣瀬潤一1, 伊藤哲史2,3, 村瀬一之1,3, 池田弘1,3
○Junichi Hirose1, Tetsufumi Ito2,3, Kazuyuki Murase1,3, Hiroshi Ikeda1,3
福井大院・工・知能システム1, 福井大・医・人体解剖学・神経科学領域2, 福井大・生命科学複合研究育成センター3
Dept. of Human and Artificial Intelligence Systems, Grad. Sch. of Engineering, Univ. of Fukui1, Dept. of Anatomy, Faculty of Medical Sci., Univ. of Fukui2, Res. and Education Program for Life Sci., Univ. of Fukui3

In the auditory ascending pathway, all auditory information from both ears converges in the inferior colliculus. Accordingly, it is very likely that the inferior colliculus plays an important role in the integration of auditory information. Anatomically, the superficial layer of the inferior collicular cortex (SLICC) forms an interesting neural circuit, since it receives a strong descending projection from the secondary auditory area in the cerebral cortex as well as ascending projections, suggesting that this area is involved in the prediction and selection of sound. However, the physiological characteristics of neurons in SLICC, especially their responses to sound stimuli, have not been elucidated. In vivo functional imaging appears to be one of the best methods for studying the characteristics of neurons in SLICC, since it is very difficult to record from neurons lying on the surface of the brain with an electrode. However, methods for in vivo functional imaging of the mouse inferior colliculus have not been established. In this study, we examined the conditions required for in vivo calcium imaging in SLICC, and investigated details of the sound-evoked activity of the surface neurons. Oregon Green 488 BAPTA-1 AM, a fluorescent calcium indicator, was injected into SLICC. To record the activity of multiple cells simultaneously, a Nipkow-disk confocal laser microscope was employed. Tone bursts of several frequencies and amplitudes and wide-band noise were delivered to the ear canal. As a result, sound-evoked activity of multiple cells in SLICC was successfully recorded. In the same field of view, cells exhibited different responses to the same sound stimuli. We also examined the existence of a functional map in SLICC from the distribution of cells having the same optimal sound stimulus conditions. We demonstrated that it is possible to study the characteristics of multiple neurons simultaneously in SLICC.
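Responses obtained with calcium indicators such as Oregon Green 488 BAPTA-1 are conventionally quantified as ΔF/F0; a minimal sketch (the baseline length and synthetic trace are illustrative):

```python
import numpy as np

def delta_f_over_f(trace, n_baseline):
    """ΔF/F0 for one cell's fluorescence trace, with F0 taken as the
    mean pre-stimulus (baseline) fluorescence."""
    f0 = trace[:n_baseline].mean()
    return (trace - f0) / f0

trace = np.array([100., 100., 100., 120., 110., 100.])  # one cell, a.u.
dff = delta_f_over_f(trace, n_baseline=3)               # peak response 0.2
```

Applying this per cell across a field of view gives the response amplitudes from which stimulus preferences, and hence a putative functional map, can be derived.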
P1-1-128
仔の音声に対する母ラット聴覚皮質反応の可塑的変化
Plastic changes in responses to pup calls in the auditory cortex of mother rats

○工藤雅治1, 西田陽子1
○Masaharu Kudoh1, Yoko Nishida1
帝京大・医・生理1
Dept Physiol, Teikyo Univ Sch Med, Tokyo, Japan1

Rodent pups emit lower-frequency calls in addition to isolation-induced ultrasonic calls. We have reported that the low-frequency pup calls of rats can be divided into three groups: harmonic-type calls, noise-type calls and pulse trains. Harmonic-type calls resemble the harmonic structure of human vowels, while noise-type calls are wide-band noises similar to fricative consonants. Rat pups frequently utter low-frequency calls when they are with their mother, indicating that pup calls are an important factor in maintaining a close mother-pup relationship. Endogenous flavoprotein fluorescence imaging (excitation: 450-490 nm, emission: 500-550 nm) conducted in mother rats after a 3-week weaning period and in nulliparous control rats shows that harmonic calls evoke clear fluorescence responses in the primary auditory cortex (AI). In contrast, synthetic noise-type calls evoke marked responses in the posterior part of AI and the posterior auditory field (PAF). In the present study, we investigated plastic changes in auditory cortical responses to low-frequency pup calls in mother rats. Fluorescence responses of AI to harmonic calls and those to artificial harmonic sounds (a fundamental and 2nd and 3rd overtones) were not significantly different in nulliparous rats. However, fluorescence responses to harmonic calls were greater than those to artificial harmonic sounds in mother rats, indicating that the AI of mother rats responds selectively to harmonic pup calls. Synthetic noise calls evoked stronger fluorescence responses in the PAF of mother rats than in nulliparous rats. These findings suggest that plastic changes in the auditory cortex play an important role in eliciting maternal behavior.
P1-1-129
二光子機能イメージングによるマウス一次聴覚野和音処理機構の解析
Harmonic sound processing in mouse primary auditory cortex revealed by in vivo two-photon calcium imaging

○塚野浩明1, 菱田竜一1, 澁木克栄1
○Hiroaki Tsukano1, Ryuichi Hishida1, Katsuei Shibuki1
新潟大・脳研・生理1
Dept of Neurophysiol, Brain Res Inst, Niigata Univ1

Pitch is perceived at the fundamental frequency (f0) even when spectral energy at f0 is missing. We have found that responses to f0 can be detected in AI as fluorescence increases: harmonic sounds of simultaneously presented 20 kHz and 25 kHz tones, which produce a missing f0 at 5 kHz, activated the AI 5 kHz area, while inharmonic sounds of simultaneously presented 19 kHz and 26 kHz tones did not. We have several lines of evidence suggesting that the f0 responses are produced by intracortical circuits from the AI 20 kHz and 25 kHz areas to the 5 kHz area. First, photo-inactivation of the 20 kHz and 25 kHz areas selectively suppressed the f0 responses in the 5 kHz area, while the tonal responses to 5 kHz in the same area were unchanged. Second, the onset of the f0 responses was significantly delayed compared with that of the tonal responses to 5 kHz in the same area. Third, mosaic sounds of alternately repeated 20 kHz and 25 kHz tones with 15 ms intervals produced the f0 responses in the AI 5 kHz area. Superimposed 20 kHz and 25 kHz sounds might produce beats at 5 kHz, which could be detected in subcortical structures; the mosaic sounds, however, produce no 5 kHz beat. To investigate the mechanisms underlying the f0 responses at the single-neuron level, we visualized the activities of neurons stained with fura-2 in the supragranular layers of AI using two-photon microscopy. In the AI 5 kHz area, the overall calcium signals recorded in neurons in response to harmonic sounds at 20 kHz and 25 kHz or to the mosaic sounds were larger than those to pure tones at 20 kHz or 25 kHz. Correlations of neuronal activity patterns between single tones (5 vs. 20 kHz, 5 vs. 25 kHz, 20 vs. 25 kHz) were not significant. However, the pattern of neural activities evoked by the mosaic sound was significantly correlated with that evoked by 5 kHz, but showed little correlation with 20 kHz or 25 kHz. Taken together, these data clearly indicate that the f0 responses are produced by intracortical circuits in AI.
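As a toy arithmetic check (not part of the authors' analysis), the fundamental implied by a harmonic complex is the greatest common divisor of its component frequencies, which is why a 20 kHz + 25 kHz pair carries a missing f0 at 5 kHz while a 19 kHz + 26 kHz pair does not:

```python
from math import gcd
from functools import reduce

def missing_f0(frequencies_hz):
    """Greatest common divisor of the component frequencies, i.e. the
    fundamental a harmonic complex implies even when no spectral energy
    is present at that frequency (the missing fundamental)."""
    return reduce(gcd, frequencies_hz)

# Harmonic pair used in the study: 20 kHz + 25 kHz -> implied f0 = 5 kHz
print(missing_f0([20000, 25000]))  # 5000
# Inharmonic pair: 19 kHz + 26 kHz shares no 5 kHz fundamental
print(missing_f0([19000, 26000]))  # 1000
```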
P1-1-130
メスキンカチョウ海馬体におけるマルチカラーイメージングによる神経細胞構築と神経活動の解析
Multicolor neuron imaging for investigating the cytoarchitecture and neural activity in hippocampal formation of female Zebra Finch

○國井貴之1, 山本真千子1, 山下正芳1, 堀田耕司1, 岡浩太郎1
○Takayuki Kunii1, Machiko Yamamoto1, Masayoshi Yamashita1, Kohji Hotta1, Kotaro Oka1
慶應義塾大学 理工学研究科 基礎理工学専攻1
Department of Biosciences and Informatics, Faculty of Science and Technology, KEIO UNIVERSITY1

The zebra finch is a model organism for investigating what happens when animals communicate with their own songs. In particular, we have focused on the function of the hippocampal formation during song perception. Previous research shows that the female zebra finch uses the hippocampal formation for higher-order auditory information processing. Immunostaining of the hippocampal formation shows that ZENK, one of the immediate early genes, is expressed in an activity-dependent manner in neurons when a female zebra finch is exposed to male songs. In particular, with typical male songs, ZENK is expressed strongly, and its locations overlap with serotonin (5-HT) immunostaining. Although we found that serotonin may be a key player in song information processing, it remains unclear whether the hippocampal formation is connected with the auditory pathway. We therefore visualized the cytoarchitecture of neurons and their neural activity in the hippocampal formation of the female zebra finch by combining several fluorescence imaging techniques.
Lipophilic dyes are a useful tool for identifying the cytoarchitecture of neurons. After immunostaining for ZENK and 5-HT in the hippocampal formation, we delivered gold particles coated with lipophilic dye to the same slices with a Gene Gun. This method enabled us to visualize the cytoarchitecture of neurons, the expression of ZENK, and the distribution of 5-HT simultaneously. Using this method, we found typical neuron shapes localized to each region of the hippocampal formation. HCm and HCl contain many bipolar-type neurons, which connect to multipolar-type or oval-type neurons in APH. We also found a connection, via ZENK-expressing bipolar-type neurons containing 5-HT, between the CF region of the hippocampal formation and the TrSM, which belongs to the mesencephalon.
We anticipate that this imaging method will provide important information about the key role of the hippocampal formation in song perception.
P1-1-131
内耳におけるNGL-2コンディショナルノックアウトが聴覚反応を消失させる
Cell-type-specific knockout of NGL-2 in inner ear disrupts auditory response

○増田明1, 後藤大道1, チョウキ1, 矢口邦雄1, 西村-穐吉幸子1, 糸原重美1
○Akira Masuda1, Hiromichi Goto1, Qi Zhang1, Kunio Yaguchi1, Sachiko Nishimura-Akiyoshi1, Shigeyoshi Itohara1
理化学研究所 脳総合研究センター 行動遺伝学技術開発チーム1
Lab Behav Gene, RIKEN BSI, Saitama1

NGL-2 is a transmembrane protein that interacts specifically with netrin-G2 at synaptic sites. The expression of these molecules is localized to specific cell types in the inner ear organs, and deletion of NGL-2 or netrin-G2 results in loss of the auditory startle reflex and abnormal auditory brainstem responses. However, the causal mechanism underlying these auditory abnormalities is not clear. NGL-2 and netrin-G2 are expressed widely across the central nervous system, including several sensory organs and the brain. We hypothesized that a specific population of NGL-2/netrin-G2-expressing cells in the inner ear contributes to normal auditory responses. To identify the cell types responsible for this auditory dysfunction, we generated conditional NGL-2 knockout mice that express Cre recombinase in specific cell types in the inner ear. The auditory startle reflex paradigm clearly showed that auditory responses were abolished in the mutants. The behavior of the mutants was comparable to that of controls in general activity, anxiety, spatial memory, object memory, and motor ability. To compare the role of NGL-2 with that of other postsynaptic proteins in the same cell types, we also generated a conditional knockout of the NR1 gene, an essential subunit of the NMDA receptor, which is also expressed in the inner ear. These mice showed a very weak but significant reduction in auditory startle responses. These data suggest that NGL-2 proteins in specific cell types play important roles in the initial stage of auditory processing.
P1-1-132
ラット聴皮質における音列抽出に関わる神経活動
Neural correlates of perceptual grouping of tone sequences in rat auditory cortex

○雨宮知樹1, 野田貴大2, 白松(磯口)知世1, 神崎亮平1,2, 高橋宏知1,2,3
○Tomoki Amemiya1, Takahiro Noda2, Tomoyo Shiramatsu-Isoguchi1, Ryohei Kanzaki1,2, Hirokazu Takahashi1,2,3
東京大院・情報理工・知能機械1, 東京大学 先端科学技術研究センター2, JSTさきがけ3
Grad Sch Informat Sci and Technol, Univ of Tokyo, Tokyo1, Adv Sci and Technol Res Ctr, Univ of Tokyo, Tokyo2, JST PRESTO3

We can easily distinguish a repetitive sound pattern from background sounds without paying attention. As is well known from auditory stream segregation and verbal transformation, repetitive sound patterns induce perceptual changes or the build-up of sound objects. Such perceptual organization may be caused by neuronal dynamics that depend on sound history. In particular, complex combinations of local and global sound characteristics modulate neuronal activities in the auditory cortex. However, how the regularity or presentation probability of sounds affects neuronal activities is poorly understood. Here we investigated the neural correlates of perceptual organization by presenting repetitive tone patterns or random tones against background random tones. While presenting tone sequences consisting of particular sequences of 4 pure tones (target tones) and random sequences of pure tones (masker tones), we recorded local field potentials with a microelectrode array from the auditory cortex of anesthetized rats. We analyzed tone-evoked responses affected by both long-term components, derived from several dozen repetitions of the tone sequence over more than one hundred seconds, and short-term components, derived from the local spectro-temporal properties of each tone pattern. So far, we have observed that the response magnitude depended on the local components of the tone sequences. Although the averaged response across each tone sequence decreased over time, the relative relationship between the responses evoked by each of the target tones did not significantly change. On the other hand, the variation of evoked responses to the repetitive target tones tended to increase over time compared with random tones as a control condition. Such response variety in the auditory cortex is a possible neural correlate of auditory object perception.
P1-1-133
酸化ストレスを介した蝸牛らせん靭帯繊維細胞におけるギャップ結合機能破綻へのカルパインの関与
Involvement of calpain in dysfunction of gap junction in the cochlear spiral ligament fibrocytes following oxidative stress

○山口太郎1, 米山雅紀1, 荻田喜代一1
○Taro Yamaguchi1, Masanori Yoneyama1, Kiyokazu Ogita1
摂南大学 薬学部 薬理学研究室1
Dept Pharmacol, Univ of Setsunan, Osaka1

It is well known that noise-induced hearing loss results mainly from enhanced oxidative stress. In particular, oxidative stress is thought to produce hair cell death in the organ of Corti and dysfunction of the lateral wall structure. Accumulating evidence indicates that the gap junction (GJ) pathway in cochlear spiral ligament fibrocytes (SLFs) of the lateral wall plays a critical role in the hearing system. Intense-noise-induced hearing loss results from formation of the major lipid peroxidation product 4-hydroxynonenal (4-HNE) by oxidative stress in the SLFs. Our previous study showed that 4-HNE can produce GJ dysfunction with disappearance of membranous connexin43 (Cx43). The purpose of this study was to elucidate the mechanism underlying 4-HNE-induced GJ dysfunction in the SLFs. Adult male Std-ddY mice were exposed to 8 kHz octave-band noise at 110 dB SPL for 1 h. The noise exposure produced a dramatic threshold shift at frequencies of 4, 12, and 20 kHz. Homogenates were prepared from the lateral wall at different time points after noise exposure for determination of 4-HNE-adducted protein levels by immunoblot analysis. Immediately after exposure, a marked elevation in the level of 4-HNE-adducted proteins was seen in the lateral wall. SLFs were isolated from 5-week-old mice and plated in dishes with DMEM supplemented with 10% FBS at 37 °C in a 5% CO2/95% air humidified incubator. After culturing for 10 DIV, the cells were replated and cultured for a further 12 DIV before use as SLFs. Exposure to 4-HNE produced a decrease in Cx43, GJ dysfunction, and an increase in the 145 kDa α-fodrin fragment derived from calpain activation. PD150606 (a calpain inhibitor) abolished the GJ dysfunction and the decrease in Cx43 after 4-HNE exposure. These results suggest that calpain is involved in oxidative stress-induced GJ dysfunction in the cochlear spiral ligament.
P1-1-134
聴性脳幹反応と音脈分凝の相関
Auditory brainstem response correlates with auditory streaming

○山岸慎平1, 芦原孝典1, 大塚翔2, 古川茂人3, 柏野牧夫1,3
○Shimpei Yamagishi1, Takanori Ashihara1, Sho Otsuka2, Shigeto Furukawa3, Makio Kashino1,3
東京工業大学大学院 総合理工学研究科 物理情報システム専攻1, 東京大学 新領域創成科学研究科 環境学専攻2, NTT コミュニケーション科学基礎研究所3
Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Kanagawa, Japan1, Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan2, NTT Communication Science Laboratories3

Auditory scene analysis critically depends on listeners' ability to parse complex acoustic inputs into meaningful perceptual objects, or auditory streams. Neural correlates of auditory streaming have been reported at both subcortical and cortical sites. However, convincing evidence for neural correlates of streaming in the human brainstem is still limited. Here, we examined the correlation between the auditory brainstem frequency-following response (FFR) and behavioral reports of perceived streaming. The stimulus was a repeated triplet-tone sequence (ABA-...; A: 315-Hz tone, B: 400-Hz tone, -: silence). Prolonged exposure to a repeated ABA- pattern may evoke perceptual bistability between one-stream (1S) and two-stream (2S) percepts. Fifteen adults participated in the experiment, which consisted of 48 sessions. In each session, participants were presented with 200 repetitions of ABA- triplets. While listening, they pressed a button corresponding to the 1S or 2S percept whenever they experienced a perceptual switch. At the same time, the FFR was recorded using a vertical one-channel electrode montage. The recorded FFR was segmented at every onset of the ABA- triplet and classified according to the behavioral reports (1S or 2S). The results were evaluated in terms of the averaged FFR amplitude and the phase locking value (PLV) of the FFR at the stimulus frequencies. To derive the PLV for a given tone, first, the FFR for each tone presentation was expressed as a unit vector on the complex plane. Then, the PLV was determined as the length of the vector average across presentations. The averaged FFR amplitude and PLV to the second A tone (A2) in the triplet were significantly smaller for 1S than for 2S percepts. For the other tones, no significant difference was observed between 1S and 2S percepts. These results demonstrate significant differences in the FFR according to perceived streaming, indicating neural correlates of auditory streaming in the human brainstem.
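The PLV computation described above can be sketched as follows (a minimal illustration only; the function names, the demodulation step, and the simulated segments are our assumptions, not the authors' analysis code):

```python
import numpy as np

def plv(segments, freq_hz, fs):
    """Phase locking value at freq_hz across repeated FFR segments.

    Each segment's response at the stimulus frequency is reduced to a
    unit vector exp(i*phase) on the complex plane; the PLV is the length
    of the average of these unit vectors (1 = perfect phase locking
    across presentations, near 0 = random phases)."""
    segments = np.asarray(segments, dtype=float)
    t = np.arange(segments.shape[1]) / fs
    # Complex demodulation at the stimulus frequency gives one phasor
    # per presentation; only its phase is kept.
    phasors = segments @ np.exp(-2j * np.pi * freq_hz * t)
    unit_vectors = phasors / np.abs(phasors)
    return np.abs(unit_vectors.mean())

# Sanity check on simulated segments (fs and tone duration are arbitrary).
fs, f = 8000, 315.0
t = np.arange(400) / fs
locked = [np.sin(2 * np.pi * f * t) for _ in range(50)]
rng = np.random.default_rng(0)
jittered = [np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
            for _ in range(50)]
print(round(plv(locked, f, fs), 2))    # perfectly locked -> 1.0
print(round(plv(jittered, f, fs), 2))  # random phases -> near 0
```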
P1-1-135
ニホンザルの変化させたクーコールによる個体識別
Identification of individuality by morphed coo-calls in Japanese macaques

○古山貴文1, 小林耕太2, 力丸裕2
○Takafumi Furuyama1, Kohta I Kobayasi2, Hiroshi Riquimaroux2
同志社大学大学院 生命医科学研究科 医工・医情報学専攻1, ニューロセンシング・バイオナビゲーション研究センター2
Sensory and Cognitive Neural System Laboratory, Graduate School of Life and Medical Sciences, Doshisha University1, Neurosensing and Bionavigation Research Center, Doshisha University Kyotanabe, Kyoto, Japan2

Individual identification is necessary for vocal communication in monkeys. Macaques utter the Smooth Early High (SEH), one type of coo call, for greeting and locating other individuals. Japanese macaques can identify individuals only by hearing the voices of conspecifics. The purpose of this study was to investigate which acoustic features monkeys use in caller identification. Two male subjects were trained to discriminate SEHs of macaque A (SEHa) from those of macaque B (SEHb); these calls were recorded from monkeys unfamiliar to the subjects. The subject had to depress a lever to begin a trial. Either SEHa (S+) or SEHb (S-) was presented after repetitive presentation of S- (3-5 times). If S+ was presented (GO) as the discriminative stimulus, the monkey had to release the lever within 800 ms after the offset of S+ (hit). If the subject failed to release the lever within 800 ms, a miss was scored. If S- continued (NOGO), the animal had to keep depressing the lever for longer than 800 ms after the offset of S- (correct rejection, CR). If the monkey released the lever during S- trials, a false alarm was recorded. Hits were reinforced with about 1 ml of orange juice. Test stimuli were an SEH of macaque A (SEHa) and one of macaque B (SEHb) (neither SEH was used in training sessions) and a series of morphed SEHs synthesized between SEHa and SEHb using the software STRAIGHT (which produces pitch-independent spectral envelopes). As a result, reaction times (RTs) to the unfamiliar SEHa were significantly shorter than those to the unfamiliar SEHb. We then created series of morphed SEHs (from S- to S+) along one acoustic parameter (pitch or vocal-tract properties), while the other parameter was held constant at that of SEHb (S-). The monkeys' RTs to morphed stimuli shortened as the distance from SEHb (S-) increased in both dimensions. These results suggest that both fundamental frequency and vocal-tract properties are important for Japanese macaques to identify individuals.
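The four trial outcomes of this GO/NOGO paradigm can be summarized in a small sketch (the function and its argument form are hypothetical, for illustration only):

```python
def classify_trial(stimulus, released, reaction_time_ms=None, window_ms=800):
    """Classify a GO/NOGO trial outcome as described above.

    stimulus: 'S+' (GO) or 'S-' (NOGO); released: whether the lever was
    released; reaction_time_ms: release latency from stimulus offset."""
    if stimulus == 'S+':  # GO trial: release within the window
        if released and reaction_time_ms is not None \
                and reaction_time_ms <= window_ms:
            return 'hit'  # reinforced with ~1 ml of orange juice
        return 'miss'
    # NOGO trial: keep depressing the lever
    if released:
        return 'false alarm'
    return 'correct rejection'

print(classify_trial('S+', True, 500))   # hit
print(classify_trial('S+', True, 900))   # miss (released too late)
print(classify_trial('S-', False))       # correct rejection
print(classify_trial('S-', True, 300))   # false alarm
```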
P1-1-136
聴覚関連領域からの下行性投射系の形態学的解析
Morphological analysis of the descending projection from the auditory related cortices

○瀧公介1, 相見良成1, 井之口文月1, 木村智子1, 西村美紀1, 本間智1, 工藤基1
○Kousuke Taki1, Yoshinari Aimi1, Fuduki Inoguchi1, Tomoko Kimura1, Miki Nishimura1, Satoru Homma1, Motoi Kudo1
滋賀医大・医・解剖1
Dept Anatomy, Shiga Univ of Medical Science, Otsu, Japan1

The mammalian auditory system includes a corticofugal projection system that is presumed to transmit top-down information such as attention or context, but its working principles remain unclear because the precise connections have yet to be revealed. The inferior colliculus (IC) is the auditory brainstem center where bottom-up and top-down information converge. We successfully visualized the cortico-collicular projection exclusively by infecting projection neurons in the auditory-related cortices with a recombinant Sindbis virus; this approach is expected to enable progress in morphological analysis because of the outstanding labeling ability of the recombinant virus and its unique labeling pattern. The Sindbis virus, which expresses palmitoylation-site-added GFP as the reporter gene, was a kind gift from Dr. Kaneko at Kyoto University. Labeled axon terminals projecting from the postdorsal auditory area were observed in the nucleus of the brachium of the IC (NBIC), the external nucleus of the IC, the central nucleus of the IC, and the deep layers of the superior colliculus, which have been implicated in spatial recognition. The axon terminals were well visualized. Labeled fibers were particularly abundant in the NBIC, and we performed further morphological analysis on them. A tyramide amplification technique enabled observation of the fluorescence-labeled fibers together with double or triple immunofluorescent staining of various populations of IC neurons. We attempted to visualize the spatial correlation between projecting IC neurons and the labeled cortico-collicular axon terminals.
